Acceleration and optimization of quantum computing simulator implemented on new Sunway supercomputer
Xinmin SHI, Yong LIU, Yaojian CHEN, Jiawei SONG, Xin LIU
Journal of Computer Applications    2023, 43 (8): 2486-2492.   DOI: 10.11772/j.issn.1001-9081.2022091456
Abstract (431)   HTML (59)   PDF (2000KB) (443)

To address the growing scale of quantum hardware and the insufficient speed of classical simulation, two optimization methods were proposed for the quantum computing simulator implemented on the new Sunway supercomputer. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition and computation strategies, which raised the efficiency of the computing kernels for partial tensor contraction and reduced redundant memory access. Secondly, a contraction path adjustment method based on data-locality optimization was used to balance the complexity and efficiency of path computation. Test results show that the operator-library improvements raise the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times; the path adjustment method raises floating-point efficiency by about 4 times while inflating the path computational complexity by only a factor of 2. Together, the two optimization methods improve the single-precision and mixed-precision floating-point efficiencies of simulating Google's 53-qubit, 20-layer quantum chip random circuit with one million amplitude samples from 3.98% and 1.69% to 18.48% and 7.42% respectively, and reduce the theoretically estimated simulation time from 470 s to 226 s for single precision and from 304 s to 134 s for mixed precision, verifying that both methods significantly accelerate quantum computing simulation.
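The complexity/efficiency trade-off of contraction-path choice described above can be illustrated with NumPy's einsum path optimizer (this is our illustration of pairwise tensor contraction in general, not the SWTT operator library or the paper's path adjustment method):

```python
import numpy as np

# A small three-tensor network: contract a (ijk) with b (jkl) over the
# shared indices j,k, then with c (lm) over l, leaving indices i,m.
rng = np.random.default_rng(0)
a = rng.random((8, 8, 8))
b = rng.random((8, 8, 8))
c = rng.random((8, 8))

# einsum_path searches for a low-cost pairwise contraction order and
# reports the FLOP estimate; the path is then reused for the contraction.
path, info = np.einsum_path('ijk,jkl,lm->im', a, b, c, optimize='optimal')
result = np.einsum('ijk,jkl,lm->im', a, b, c, optimize=path)
print(result.shape)  # (8, 8)
```

Different paths give identical results but can differ by orders of magnitude in floating-point cost, which is why path planning matters at Sycamore scale.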

Table and Figures | Reference | Related Articles | Metrics
Remora optimization algorithm based on chaotic host switching mechanism
Heming JIA, Shanglong LI, Lizhen CHEN, Qingxin LIU, Di WU, Rong ZHENG
Journal of Computer Applications    2023, 43 (6): 1759-1767.   DOI: 10.11772/j.issn.1001-9081.2022060901
Abstract (287)   HTML (8)   PDF (1965KB) (181)

The optimization process of the Remora Optimization Algorithm (ROA) includes three modes: attaching to a host, empirical attack, and host foraging, giving the algorithm relatively strong exploration and exploitation abilities. However, because the original algorithm switches hosts through empirical attack, it suffers from a poor balance between exploration and exploitation, slow convergence, and a tendency to fall into local optima. To address these problems, a Modified ROA (MROA) based on a chaotic host switching mechanism was proposed. Firstly, a new host switching mechanism was designed to better balance exploration and exploitation. Then, to diversify the initial hosts of the remoras, Tent chaotic mapping was introduced for population initialization, further improving the performance of the algorithm. Finally, MROA was compared with the original ROA and six other algorithms including the Reptile Search Algorithm (RSA) on the CEC2020 test functions. The experimental results show that the best fitness value, average fitness value, and standard deviation of fitness values obtained by MROA are better than those of ROA, RSA, the Whale Optimization Algorithm (WOA), the Harris Hawks Optimization (HHO) algorithm, the Sperm Swarm Optimization (SSO) algorithm, the Sine Cosine Algorithm (SCA), and the Sooty Tern Optimization Algorithm (STOA) by an average of 28%, 33%, and 12% respectively. These CEC2020 results show that MROA has good optimization ability, convergence, and robustness. The effectiveness of MROA on engineering problems was further verified by solving the welded beam and multi-plate clutch brake design problems.
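Tent chaotic initialization as described above can be sketched as follows (a minimal illustration of the general technique, not the paper's exact code; the breakpoint `a` and seed `x0` are our choices):

```python
import numpy as np

def tent_map_init(pop_size, dim, lower, upper, x0=0.37, a=0.7):
    """Generate an initial population by mapping a Tent chaotic
    sequence in [0, 1] onto the search-space bounds."""
    pop = np.empty((pop_size, dim))
    x = x0
    for i in range(pop_size):
        for j in range(dim):
            # Tent map: x -> x/a on [0, a), (1-x)/(1-a) on [a, 1]
            x = x / a if x < a else (1.0 - x) / (1.0 - a)
            pop[i, j] = lower + x * (upper - lower)
    return pop

pop = tent_map_init(30, 10, -100.0, 100.0)
print(pop.shape)  # (30, 10)
```

Compared with uniform random initialization, the chaotic sequence covers the search space more evenly, which is the diversification effect the abstract attributes to Tent mapping.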

Table and Figures | Reference | Related Articles | Metrics
Software quality evaluation method considering decision maker’s psychological behaviors
Yanhao SUN, Wei XU, Tao ZHANG, Ningxin LIU
Journal of Computer Applications    2022, 42 (8): 2528-2533.   DOI: 10.11772/j.issn.1001-9081.2021060999
Abstract (248)   HTML (2)   PDF (611KB) (56)

To address the neglect of decision makers' psychological behaviors in software quality evaluation methods, a TOmada de Decisão Interativa e Multicritério (TODIM) software quality evaluation method based on interval 2-tuple linguistic information was proposed. Firstly, interval 2-tuple linguistic information was used to characterize the experts' evaluations of software quality. Secondly, the subjective and objective weights of software quality attributes were calculated by a subjective weighting method and by the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) respectively, and on this basis the comprehensive weights of the attributes were obtained by combination weighting. Thirdly, to better describe the experts' psychological behaviors during evaluation, TODIM was introduced into software quality evaluation. Finally, the method was used to evaluate the software quality of the assistant dispatcher terminal in a high-speed railway dispatching system. The result shows that the third assistant dispatcher terminal software provided by the railway software supplier has the highest dominance value and therefore the best quality. Comparisons with regret theory and the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE-II) show that the three methods agree on the best-quality software but differ somewhat in their overall rankings, indicating that the constructed method has strong advantages in describing both the interaction between multiple criteria and the psychological behaviors of decision makers.
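The dominance computation at the heart of TODIM can be sketched numerically (this is the classical crisp-valued TODIM, our illustration only; the paper works with interval 2-tuple linguistic information, and the matrix, weights, and attenuation factor below are hypothetical):

```python
import numpy as np

def todim_rank(X, w, theta=2.5):
    """Classical TODIM: X is m alternatives x n benefit criteria
    (normalized), w the criterion weights, theta the loss-attenuation
    factor. Returns overall dominance scores scaled to [0, 1]."""
    wr = w / w.max()                      # weights relative to the largest
    m, n = X.shape
    delta = np.zeros((m, m))
    for i in range(m):
        for k in range(m):
            for c in range(n):
                d = X[i, c] - X[k, c]
                if d > 0:                 # perceived gain
                    delta[i, k] += np.sqrt(wr[c] * d / wr.sum())
                elif d < 0:               # perceived loss, attenuated by theta
                    delta[i, k] -= np.sqrt(wr.sum() * (-d) / wr[c]) / theta
    xi = delta.sum(axis=1)
    return (xi - xi.min()) / (xi.max() - xi.min())

X = np.array([[0.7, 0.5, 0.9],
              [0.6, 0.8, 0.4],
              [0.9, 0.6, 0.7]])
w = np.array([0.5, 0.3, 0.2])
scores = todim_rank(X, w)
print(scores.argmax())  # index of the dominant alternative
```

The asymmetric gain/loss terms are what lets TODIM model the loss-averse psychology of decision makers that plain weighted-sum methods ignore.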

Table and Figures | Reference | Related Articles | Metrics
Capsule network knowledge graph embedding model based on relational memory
Heng CHEN, Siyi WANG, Zhengguang LI, Guanyu LI, Xin LIU
Journal of Computer Applications    2022, 42 (7): 1985-1992.   DOI: 10.11772/j.issn.1001-9081.2021050764
Abstract (390)   HTML (22)   PDF (1243KB) (218)

As a semantic knowledge base, a Knowledge Graph (KG) stores real-world entities and their relationships as structured triples. To infer the missing real triples in a knowledge graph, and considering the strong triple representation ability of relational memory networks and the powerful feature processing ability of capsule networks, a capsule network knowledge graph embedding model based on relational memory was proposed. First, encoded embedding vectors were formed from the latent dependencies between entities and relations together with other important information. Then, the embedding vectors were convolved with filters to generate different feature maps, which were recombined into the corresponding capsules. Finally, the connections between capsules in adjacent layers were specified through the squash (compression) function and dynamic routing, and the confidence of the current triple was estimated by the inner-product score between the child capsule and the weight vector. Link prediction results show that, compared with the CapsE model, the proposed model improves the Mean Reciprocal Rank (MRR) and Hit@10 by 7.95% and 2.2 percentage points respectively on the WN18RR dataset, and by 3.82% and 2 percentage points respectively on the FB15K-237 dataset. The experimental results show that the proposed model can more accurately infer the relationship between the head entity and the tail entity.
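The squash function and dynamic routing mentioned above are standard capsule-network machinery; a minimal sketch (shapes are illustrative, not the paper's configuration) looks like this:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Shrink a vector's length into [0, 1) while keeping its direction."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement. u_hat: (num_in, num_out, dim) prediction
    vectors; returns the (num_out, dim) output capsules."""
    b = np.zeros(u_hat.shape[:2])                          # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # couplings
        s = (c[..., None] * u_hat).sum(axis=0)             # weighted sum
        v = squash(s)                                      # output capsules
        b = b + (u_hat * v[None]).sum(-1)                  # agreement update
    return v

u_hat = np.random.default_rng(1).normal(size=(6, 4, 8))
v = dynamic_routing(u_hat)
print(v.shape)  # (4, 8)
```

The squashed length acts as a probability-like confidence, which is the quantity the model ultimately scores against the weight vector to judge a triple.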

Table and Figures | Reference | Related Articles | Metrics
Design and implementation of cloud native massive data storage system based on Kubernetes
Fuxin LIU, Jingwei LI, Yihong WANG, Lin LI
Journal of Computer Applications    2020, 40 (2): 547-552.   DOI: 10.11772/j.issn.1001-9081.2019101732
Abstract (739)   HTML (20)   PDF (560KB) (568)

To address the sharp increase of data on the cloud driven by the development and popularization of cloud native technology, as well as the performance and stability bottlenecks of existing storage technology, a Haystack-based storage system was proposed. With optimizations in service discovery, automatic fault tolerance, and caching, the system is better suited to cloud native workloads and meets the growing, high-frequency file storage and read/write demands of the data acquisition, storage, and analysis industries. The object storage model used by the system supports massive file storage with high-frequency reads and writes; a simple, unified application interface is provided for businesses using the storage system; a file caching strategy improves resource utilization; and the rich automated tool chain of Kubernetes makes the system easier to deploy, easier to scale, and more stable than comparable storage systems. Experimental results indicate that, for large-scale fragmented data storage with far more reads than writes, the proposed system improves performance and stability over current mainstream object storage and file systems.

Table and Figures | Reference | Related Articles | Metrics
Satellite terminal bursty traffic model and queuing performance analysis
BIE Yuxia, ZHAN Zhaxin, LIU Haiyan
Journal of Computer Applications    2014, 34 (4): 958-962.   DOI: 10.11772/j.issn.1001-9081.2014.04.0958
Abstract (514)   PDF (678KB) (369)

With the increasing use of satellite networks in emergency communication and the continuous growth of satellite terminal service types, traffic may surge instantaneously, showing significant burstiness, and the data flow at the terminal also exhibits self-similarity. A method was proposed to generate self-similar satellite terminal traffic by superposing ON/OFF sources whose period lengths follow a heavy-tailed distribution. The effects of this self-similar input traffic on packet loss rate, delay, and delay jitter were discussed, along with the resulting effective bandwidth requirements. The relationships between packet loss rate, delay, delay jitter, and system cache size at the network terminal were obtained by simulation, and on this basis a method was put forward to reduce delay and packet loss, providing theoretical support for efficient information transmission under limited bandwidth and system cache.
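The superposed heavy-tailed ON/OFF construction is the standard way to synthesize self-similar traffic; a minimal sketch (ours, not the paper's simulator; source count, Pareto shape, and slot count are illustrative) is:

```python
import numpy as np

def onoff_traffic(n_sources=16, n_slots=10_000, alpha=1.5, xm=1.0, seed=0):
    """Aggregate load of n_sources ON/OFF sources whose ON and OFF
    period lengths are Pareto(alpha, xm) distributed (1 < alpha < 2
    gives the heavy tail that induces self-similarity)."""
    rng = np.random.default_rng(seed)
    load = np.zeros(n_slots)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            # Pareto inverse-CDF sample, rounded up to whole slots
            length = int(np.ceil(xm * (1.0 - rng.random()) ** (-1.0 / alpha)))
            if on:
                load[t:t + length] += 1.0   # source emits during ON period
            t += length
            on = not on
    return load

load = onoff_traffic()
print(load.mean())  # average number of simultaneously active sources
```

Feeding such a trace into a finite-buffer queue is how the abstract's packet-loss/delay/jitter-versus-cache relationships would be measured.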

Reference | Related Articles | Metrics
Auto-clustering algorithm based on compute unified device architecture and gene expression programming
DU Xin, LIU Dagang, ZHANG Kaihuo, SHEN Yuan, ZHAO Kang, NI Youcong
Journal of Computer Applications    2013, 33 (07): 1890-1893.   DOI: 10.11772/j.issn.1001-9081.2013.07.1890
Abstract (912)   PDF (718KB) (531)
Two steps of the GEP-Cluster algorithm are inefficient: the screening and aggregation of clustering centers, and the calculation of distances between data objects and clustering centers. To solve this inefficiency, an auto-clustering algorithm based on Compute Unified Device Architecture (CUDA) and Gene Expression Programming (GEP), named CGEP-Cluster, was proposed. Specifically, the screening and aggregation of clustering centers was improved by the Gene Read & Compute Machine (GRCM) method, and CUDA was used to parallelize the distance calculation between data objects and clustering centers. The experimental results show that, compared with the GEP-Cluster algorithm, CGEP-Cluster achieves a speedup of almost eight times when the number of data objects is large. CGEP-Cluster can be used for automatic clustering when the number of clusters is unknown and the data set is large.
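The distance step that CGEP-Cluster offloads to the GPU is embarrassingly parallel; shown here as a vectorized NumPy sketch (our illustration of the all-pairs computation, with the CUDA kernel itself omitted):

```python
import numpy as np

def all_pair_distances(objects, centers):
    """objects: (n, d); centers: (k, d) -> (n, k) Euclidean distances.
    Each (object, center) pair is independent, which is exactly what
    makes the step a good fit for GPU parallelization."""
    diff = objects[:, None, :] - centers[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(2)
X = rng.random((1000, 3))
C = rng.random((5, 3))
D = all_pair_distances(X, C)
labels = D.argmin(axis=1)   # nearest center for each object
print(D.shape)  # (1000, 5)
```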
Reference | Related Articles | Metrics
Image segmentation based on fast converging loopy belief propagation algorithm
Sheng-jun XU, Xin LIU, Liang ZHAO
Journal of Computer Applications    2011, 31 (08): 2229-2231.   DOI: 10.3724/SP.J.1087.2011.02229
Abstract (1739)   PDF (682KB) (754)
Large computational cost and a high mis-classification rate are two disadvantages of the Loopy Belief Propagation (LBP) algorithm for image segmentation. A fast image segmentation method based on the LBP algorithm was proposed. First, a local-region Gibbs energy model was built. Then, region messages were propagated by the LBP algorithm, with an efficient speedup technique used to improve its running speed. Finally, the segmentation result was obtained by applying the Maximum A Posteriori (MAP) criterion to the local-region Gibbs energy. The experimental results show that the proposed algorithm not only obtains more accurate segmentation results, especially for noisy or textured images, but also runs faster.
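A single min-sum LBP message update on a pairwise MRF with a Potts smoothness term can be written in closed form, which is one classic source of speedup (a toy sketch of the generic technique, not the paper's region-level formulation; `lam` is a hypothetical smoothness weight):

```python
import numpy as np

def update_message(unary, incoming, lam=1.0):
    """Min-sum message to a neighbor under a Potts pairwise cost
    (0 if labels agree, lam otherwise). unary: (L,) data costs at the
    sender; incoming: (L,) summed messages into the sender from its
    other neighbors."""
    h = unary + incoming
    # Potts closed form: either keep the same label (cost h) or switch
    # from the cheapest label and pay lam -- no L x L inner loop needed.
    msg = np.minimum(h, h.min() + lam)
    return msg - msg.min()        # normalize to prevent numeric drift

m = update_message(np.array([0.2, 1.0, 0.8]), np.zeros(3))
print(m)
```

This reduces each message from O(L^2) to O(L) per update, the kind of saving that makes LBP practical on images.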
Reference | Related Articles | Metrics
Anonymous fingerprinting scheme with straight-line extractors
Xin LIU
Journal of Computer Applications    2011, 31 (08): 2187-2191.   DOI: 10.3724/SP.J.1087.2011.02187
Abstract (1044)   PDF (939KB) (745)
Constructing an anonymous fingerprinting scheme from group signatures had remained an open problem. To solve it, an anonymous fingerprinting scheme with straight-line extractors was proposed, incorporating the Canard-Gouget-Hufschmitt zero-knowledge proof of OR statements (CANARD S, GOUGET A, HUFSCHMITT E. A handy multi-coupon system. ACNS 2006: Proceedings of the 4th International Conference on Applied Cryptography and Network Security, LNCS 3989. Berlin: Springer-Verlag, 2006: 66-81), the Chida-Yamamoto batch zero-knowledge proof and verification (CHIDA K, YAMAMOTO G. Batch processing for proofs of partial knowledge and its applications. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2008, E91-A(1): 150-159), and Arita's straight-line extractable commitment scheme (ARITA S. A straight-line extractable non-malleable commitment scheme. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2007, E90-A(7): 1384-1394). Notably, one salient feature of the new scheme is its support for concurrent registration, making it especially suitable for deployment over the Internet. Moreover, the scheme has straight-line extractors: its security reduction does not depend on an inefficient rewinding strategy and is therefore tight. Formal security analysis shows that the proposed scheme achieves all the properties required of anonymous fingerprinting schemes.
Reference | Related Articles | Metrics
Software reliability prediction based on improved PSO-SVM model
Xiao-nan ZHANG, An-xin LIU, Bin LIU, Hong-mei ZHANG, Xing Qing
Journal of Computer Applications    2011, 31 (07): 1762-1764.   DOI: 10.3724/SP.J.1087.2011.01762
Abstract (1945)   PDF (621KB) (778)
The major disadvantages of current software reliability models were discussed. Then, based on an analysis of the classic PSO-SVM model and the characteristics of software reliability prediction, several improvements to the PSO-SVM model were proposed and an improved model was established. Finally, simulation results show that, compared with classic models, the improved model achieves better prediction precision, better generalization ability, and lower dependence on the number of samples, making it more applicable to software reliability prediction.
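The PSO half of a PSO-SVM pipeline can be sketched in a few lines (a bare-bones illustration of standard particle swarm optimization, not the paper's improved variant; the fitness below is a stand-in quadratic rather than a real SVM cross-validation error, and all parameter names are ours):

```python
import numpy as np

def pso(fitness, lb, ub, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize fitness over the box [lb, ub] with n particles."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(fitness, 1, x)
    g = pbest[pval.argmin()]                       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                 # keep inside the box
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# Stand-in fitness: distance from a hypothetical optimal (C, gamma) pair.
best, val = pso(lambda p: (p[0] - 10.0) ** 2 + (p[1] - 0.1) ** 2,
                lb=np.array([0.1, 1e-3]), ub=np.array([100.0, 1.0]))
print(best, val)
```

In a real PSO-SVM, the lambda would be replaced by the cross-validated prediction error of an SVM trained with the candidate (C, gamma).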
Reference | Related Articles | Metrics
Representing object role modeling models with Web ontology language description logic axioms
Wen-lin PAN, Da-xin LIU
Journal of Computer Applications    2011, 31 (04): 1062-1066.   DOI: 10.3724/SP.J.1087.2011.01062
Abstract (1295)   PDF (964KB) (395)
Object Role Modeling (ORM) has been used in ontology engineering to model domain ontologies, which requires representing ORM models as OWL DL axioms so that semantic conflicts and redundancy can be checked with DL reasoners and ORM ontologies can be published on the Semantic Web. By comparing the semantics of ORM models and OWL DL axioms, performing equivalent model conversions, and introducing new operators and properties, mapping rules for representing ORM models as OWL DL axioms were proposed. Except for a few constraints, most ORM model elements can be represented by OWL DL axioms.
Related Articles | Metrics
Improved algorithm of rough K-means based on relative distance
Ming-chun WANG, Wan-sheng TANG, Qi JIANG, Xin LIU
Journal of Computer Applications   
Abstract (1504)   PDF (684KB) (1115)
Two rough K-means algorithms based on absolute distance were first discussed and their deficiencies pointed out. The rationality of replacing absolute distance with relative distance was then demonstrated, and on this basis an improved rough K-means algorithm based on relative distance was given. Finally, the feasibility and effectiveness of the algorithm were verified by comparing clustering results on random, Iris, and text data.
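The relative-distance idea can be sketched as follows (our hedged illustration of the general rough K-means assignment step, not the paper's exact rule; the threshold name `zeta` is ours): an object joins a second center's upper approximation when its distance *ratio* to the nearest center, rather than the absolute difference, falls under a threshold.

```python
import numpy as np

def rough_assign(X, centers, zeta=1.2):
    """Return (lower, upper) membership masks of shape (n, k).
    upper: object may belong to the cluster (boundary included);
    lower: object certainly belongs (it is in exactly one upper set)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    d_min = d.min(axis=1, keepdims=True)
    upper = d <= zeta * (d_min + 1e-12)          # relative-distance test
    lower = upper & (upper.sum(axis=1, keepdims=True) == 1)
    return lower, upper

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.0]])
C = np.array([[0.0, 0.0], [1.0, 0.0]])
lower, upper = rough_assign(X, C)
print(upper.astype(int))   # midpoint object lands in both upper sets
```

A ratio test is scale-invariant, which is the advantage over the absolute-difference test used by the two algorithms the abstract criticizes.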
Related Articles | Metrics
Study on indicators selection of recycling enterprise process performance measurement software
Da-Feng XU, Qing LI, Xin LIU, Yu-Liu CHEN
Journal of Computer Applications   
Abstract (1305)   PDF (712KB) (657)
Since current process performance measurement software for recycling enterprises lacks practical models and an appropriate methodology to direct the identification and selection of process performance indicators, this paper first brought forward a formal model of recycling enterprise process performance indicator selection based on fuzzy covering sets, which highlights alignment with enterprise strategy. Then, under the guidance of the Balanced Scorecard philosophy, it constructed an instantiated model, Recycling Enterprise Process Performance Indicators Selection with Strategic Alignment (SA-REP2IS), which correlates the strategy of a recycling enterprise with the related performance indicators; a corresponding problem-solving strategy for the model was also proposed. Finally, a case study was given to illustrate the application of the proposed methods.
Related Articles | Metrics
Locally adaptive image denoising based on bivariate shrinkage function
Xin LIU
Journal of Computer Applications   
Abstract (2798)   PDF (808KB) (996)
To improve the performance of image denoising methods, a locally adaptive denoising algorithm was presented. The algorithm assumes statistical dependence among wavelet coefficients. First, a bivariate probability distribution model was introduced to model the statistics of wavelet coefficients, and the corresponding nonlinear threshold function (bivariate shrinkage function) was derived from the model using Bayesian estimation theory. Second, using local variance estimation, a locally adaptive image denoising algorithm was presented; the algorithm can also be applied in the complex wavelet domain. Experimental results and comparative analysis illustrate the effectiveness of the denoising algorithm.
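The bivariate shrinkage rule the abstract refers to (the Sendur-Selesnick estimator for a coefficient and its parent) can be written as a standalone sketch; `sigma` would come from the local variance estimation step, and the sample values are hypothetical:

```python
import numpy as np

def bivariate_shrink(y1, y2, sigma_n, sigma):
    """Bivariate shrinkage: y1 is a noisy wavelet coefficient, y2 its
    parent, sigma_n the noise std, sigma the local signal std.
    w1_hat = max(sqrt(y1^2 + y2^2) - sqrt(3)*sigma_n^2/sigma, 0)
             / sqrt(y1^2 + y2^2) * y1"""
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / (r + 1e-12)
    return gain * y1

# A strong coefficient (backed by a strong parent) survives; a weak
# one is shrunk to zero.
out = bivariate_shrink(np.array([5.0, 0.2]), np.array([4.0, 0.1]),
                       sigma_n=1.0, sigma=2.0)
print(out)
```

Because the threshold depends on both the coefficient and its parent, the rule exploits exactly the inter-scale dependence that the bivariate probability model captures.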
Related Articles | Metrics